Non-parametric Residual Variance Estimation in Supervised Learning
Authors
Abstract
The residual variance estimation problem is well known in statistics and machine learning, with many applications, for example in nonlinear modelling. In this paper, we show that the problem can be formulated in a general supervised learning context. Emphasis is on two widely used non-parametric techniques known as the Delta test and the Gamma test. Under some regularity assumptions, a novel proof of convergence is given for the two estimators, which are subsequently verified and compared on two meaningful study cases.
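As a concrete illustration of the kind of estimator the abstract refers to, the Delta test in its standard nearest-neighbour form estimates the noise variance as half the mean squared difference between each output and the output of its nearest neighbour in input space. The sketch below is illustrative (the data, function, and noise level are invented for the example), not the paper's own experimental setup:

```python
import numpy as np

def delta_test(X, y):
    """Delta test: non-parametric residual variance estimate.

    For each point, find its nearest neighbour in input space and
    average half the squared difference of the corresponding outputs.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    # Pairwise squared Euclidean distances in input space.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)   # exclude self-matches
    nn = d2.argmin(axis=1)         # index of each point's nearest neighbour
    return 0.5 * np.mean((y[nn] - y) ** 2)

# Noisy 1-D example: the estimate should approach the true noise
# variance (here 0.1**2 = 0.01) as the sample size grows.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 1))
y = 2.0 * X[:, 0] + rng.normal(0.0, 0.1, size=2000)
print(delta_test(X, y))
```

Because the nearest-neighbour distances shrink as the sample grows, the contribution of the smooth part of the target function vanishes and only the noise variance remains, which is the intuition behind the convergence results studied in the paper.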
Similar papers
A general procedure for learning mixtures of independent component analyzers
This paper presents a new procedure for learning mixtures of independent component analyzers. The procedure includes non-parametric estimation of the source densities, supervised-unsupervised learning of the model parameters, incorporation of any independent component analysis (ICA) algorithm into the learning of the ICA mixtures, and estimation of residual dependencies after training for corre...
Unsupervised Feature Learning via Non-Parametric Instance Discrimination
Neural net classifiers trained on data with annotated class labels can also capture apparent visual similarity among categories without being directed to do so. We study whether this observation can be extended beyond the conventional domain of supervised learning: Can we learn a good feature representation that captures apparent similarity among instances, instead of classes, by merely asking ...
Semi-supervised kernel density estimation for video annotation
Insufficiency of labeled training data is a major obstacle for automatic video annotation. Semi-supervised learning is an effective approach to this problem by leveraging a large amount of unlabeled data. However, existing semi-supervised learning algorithms have not demonstrated promising results in large-scale video annotation due to several difficulties, such as large variation of video conte...
Unsupervised Risk Estimation with only Structural Assumptions
Given a model θ and unlabeled samples from a distribution p∗, we show how to estimate the labeled risk of θ while only making structural (i.e., conditional independence) assumptions about p∗. This lets us estimate a model's test error on distributions very different from its training distribution, thus performing unsupervised domain adaptation even without assuming the true predictor remains co...
Stacked Density Estimation
In this paper, the technique of stacking, previously only used for supervised learning, is applied to unsupervised learning. Specifically, it is used for non-parametric multivariate density estimation, to combine finite mixture model and kernel density estimators. Experimental results on both simulated data and real world data sets clearly demonstrate that stacked density estimation outperforms ot...